Meta buys Chinese-founded AI start-up Manus

BBC News

Meta says it is acquiring the Chinese-founded AI firm Manus as it looks to boost the capabilities of its tech. Bloomberg analysts and The Wall Street Journal suggested the purchase could be worth more than $2bn (£1.48bn). Meta said the deal would help improve its own AI by giving people access to agents - tools which can do complex things with minimal user interaction, such as planning trips or making presentations. "Manus's exceptional talent will join Meta's team to deliver general-purpose agents across our consumer and business products, including Meta AI," it said in a blog post. Barton Crockett, an analyst at Rosenblatt Securities, told Reuters the purchase was a natural fit for Meta, extending boss Mark Zuckerberg's vision of personal AI using agents.


Disney and OpenAI have made a surprise deal – what happens next?

New Scientist

Disney's famous Mickey Mouse character will soon be available for use in AI-generated videos. The world's best-known AI company and the world's best-known entertainment firm have come to a surprise agreement to allow AI versions of some of the most iconic characters in film, TV and cartoons to be used in generative AI videos and images. The Walt Disney Company has signed a deal with OpenAI that will allow the AI firm's Sora video generation tool and ChatGPT image creator to use more than 200 of Disney's most iconic characters. Meanwhile, Disney remains in dispute with another AI firm, Midjourney, over alleged infringement of its intellectual property (IP), claiming Midjourney aims to "blatantly incorporate and copy Disney's and Universal's famous characters" into its image-generating tool. The characters now deemed fair game for OpenAI users include the likes of Mickey and Minnie Mouse, Simba and Mufasa from The Lion King, and Moana, as well as Marvel and Lucasfilm characters, including some of those franchises' most well-known names.


AI firms began to feel the legal wrath of copyright holders in 2025

New Scientist

The three years since the release of ChatGPT, OpenAI's generative AI chatbot, have seen huge changes in every part of our lives. The most high-profile case was filed by Disney and Universal in June: both alleged in a lawsuit that AI image generator Midjourney had been trained on their intellectual property, allowing users to create images that "blatantly incorporate and copy Disney's and Universal's famous characters". In October, the Japanese government formally asked OpenAI, the company behind the Sora 2 AI video generator, to respect the intellectual property rights of its culture, including manga and popular video games such as those published by Nintendo. Sora 2 has faced further controversy due to its ability to create lifelike footage of real people.


UK share values 'most stretched' since 2008, Bank warns

BBC News

The Bank of England has warned of a sharp correction in the value of major tech companies amid growing fears of an artificial intelligence (AI) bubble. It said share prices in the UK are close to the most stretched they have been since the 2008 global financial crisis, while equity valuations in the US are reminiscent of those before the dotcom bubble burst. The central bank's financial stability report warned valuations are particularly stretched for companies focused on AI. It said the growth of the sector in the next five years would be fuelled by trillions of dollars of debt, raising financial stability risks if the value of the companies falls. The Bank of England cited industry figures forecasting spending on AI infrastructure could top $5tn (£3.8tn).


AI firms must be clear on risks or repeat tobacco's mistakes, says Anthropic chief

The Guardian

The Anthropic chief executive, Dario Amodei, has flagged various concerns about its AI models recently. Artificial intelligence will become smarter than "most or all humans in most or all ways", says Dario Amodei. Mon 17 Nov 2025. Artificial intelligence companies must be transparent about the risks posed by their products or be in danger of repeating the mistakes of tobacco and opioid companies, according to the chief executive of the AI startup Anthropic. Dario Amodei, who runs the US company behind the Claude chatbot, said he believed AI would become smarter than "most or all humans in most or all ways" and urged his peers to "call it as you see it". Speaking to CBS News, Amodei said a lack of transparency about the impact of powerful AI would replay the errors of cigarette and opioid firms that failed to raise a red flag over the potential health damage of their own products.


Adviser to UK minister claimed AI firms would never have to compensate creatives

The Guardian

A senior ministerial aide said AI companies would never have to compensate creatives for using their content to train their systems, in a statement that has alarmed campaigners demanding Labour deliver a fairer deal for musicians, artists and writers from the tech industry. Kirsty Innes, recently appointed as a special adviser to Liz Kendall, the secretary of state for science, innovation and technology, said "whether or not you philosophically believe the big AI firms should compensate content creators, they in practice will never legally have to". The government is consulting on how creatives should be compensated by AI firms and only last week leading British artists including Mick Jagger, Kate Bush and Paul McCartney urged Keir Starmer to stand up for creators' human rights and protect their work. Innes, who previously worked at the Tony Blair Institute (TBI) thinktank, has deleted the statement, which she posted to X in February, seven months before she became a ministerial adviser. In the deleted posts, seen by the Guardian, she said: "A lot of this has already happened and it can continue to happen outside the UK, whatever our laws say."


AI firm says its technology weaponised by hackers

BBC News

But it is not just cyber-crime that the tech is being used for. Anthropic said "North Korean operatives" used its models to create fake profiles to apply for remote jobs at US Fortune 500 tech companies. The use of remote jobs to gain access to companies' systems has been known about for a while, but Anthropic says using AI in the fraud scheme is "a fundamentally new phase for these employment scams". It said AI was used to write job applications, and once the fraudsters were employed, it was used to help translate messages and write code. Often, North Korean workers are "sealed off from the outside world, culturally and technically, making it harder for them to pull off this subterfuge," said Geoff White, co-presenter of the BBC podcast The Lazarus Heist.


Donald Trump Is Fairy-Godmothering AI

The Atlantic - Technology

Earlier today, Donald Trump unveiled his administration's "AI Action Plan" - a document that details, in 23 pages, the president's "vision of global AI dominance" and offers a road map for America to achieve it. AI companies such as OpenAI and Nvidia must be allowed to move as fast as they can. As the White House officials Michael Kratsios, David Sacks, and Marco Rubio wrote in the plan's introduction, "Simply put, we need to 'Build, Baby, Build!'" The action plan is the direct result of an executive order, signed by Trump in the first week of his second term, that directed the federal government to produce a plan to "enhance America's global AI dominance." For months, the Trump administration solicited input from AI firms, civil-society groups, and everyday citizens.


AI firms 'unprepared' for dangers of building human-level systems, report warns

The Guardian

Artificial intelligence companies are "fundamentally unprepared" for the consequences of creating systems with human-level intellectual performance, according to a leading AI safety group. The Future of Life Institute (FLI) said none of the firms on its AI safety index scored higher than a D for "existential safety planning". One of the five reviewers of the FLI's report said that, despite aiming to develop artificial general intelligence (AGI), none of the companies scrutinised had "anything like a coherent, actionable plan" to ensure the systems remained safe and controllable. AGI refers to a theoretical stage of AI development at which a system is capable of matching a human in carrying out any intellectual task. OpenAI, the developer of ChatGPT, has said its mission is to ensure AGI "benefits all of humanity".


Musk's AI firm forced to delete posts after chatbot praises Hitler and makes antisemitic comments

Daily Mail - Science & tech

Elon Musk's AI firm has been forced to delete posts after the Grok chatbot praised Hitler and made a string of deeply antisemitic posts. The company xAI said it had removed 'inappropriate' social media posts today following complaints from users. These posts followed Musk's announcement that he was taking measures to ensure the AI bot was more 'politically incorrect'. Over the following days, the AI began repeatedly referring to itself as 'MechaHitler' and said that Hitler would have 'plenty' of solutions to 'restore family values' to America. In a post on X, xAI wrote: 'We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts. Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.'